Federated Word Vectors

This example demonstrates how a word vector model can be trained in PyTorch using federated learning with PySyft. We distribute the text data to two workers, Bob and Alice, to whom the model is sent and on whose data it is trained. Once training is finished, the trained model is sent back to its owner, where it can be used to make predictions, or its embedding layer, which consists of the learnt word vectors, can be reused. Federated learning applied to word vectors is a great way to analyze textual data without seeing the text itself and risking an invasion of privacy; a real-world application might be understanding the internal e-mail culture of an organization. In this example we learn a word embedding by trying to predict the next word given a context of N words.

Hrishikesh Kamath - GitHub: @kamathhrishi
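
For instance, with a context size of 3 (the value used below), consecutive words of the corpus yield (context, target) training pairs like the ones built later in TextDataset. A quick illustration in plain Python:

# Illustration only: how an N-word context maps to a next-word target
corpus = "When forty winters shall besiege thy brow".split()
CONTEXT_SIZE = 3
pairs = [(corpus[i:i + CONTEXT_SIZE], corpus[i + CONTEXT_SIZE])
         for i in range(len(corpus) - CONTEXT_SIZE)]
print(pairs[0])  # (['When', 'forty', 'winters'], 'shall')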


In [1]:
#Import modules required for PyTorch Neural Networks

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

from torch.utils.data import Dataset

In [2]:
# Shakespeare Sonnet 2 as the text to be learned

dataset = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()

In [3]:
class Arguments():
    def __init__(self):
        self.batch_size = 1
        self.test_batch_size = 1000
        self.epochs = 10
        self.lr = 0.01
        self.momentum = 0.5 # <-- We currently do not support momentum
        self.no_cuda = False
        self.seed = 1
        self.log_interval = 10
        self.save_model = False
        self.context_size = 3
        self.embedding_dim = 10

In [4]:
args=Arguments()

In [5]:
# Set a manual seed to maintain consistency across runs
torch.manual_seed(args.seed)


Out[5]:
<torch._C.Generator at 0x117c60790>

In [6]:
#Import PySyft library required for federated learning
import syft as sy

In [7]:
#Define Syft workers Bob and Alice for federated learning

hook = sy.TorchHook(torch)  # <-- NEW: hook PyTorch, i.e. add extra functionality to support Federated Learning
bob = sy.VirtualWorker(hook, id="bob")  # <-- NEW: define remote worker bob
alice = sy.VirtualWorker(hook, id="alice")  # <-- NEW: and alice

In [8]:
# Vocabulary built from the corpus
vocab = set(dataset)
word_to_ix = {word: i for i, word in enumerate(vocab)}
ix_to_word = {word_to_ix[word]: word for word in word_to_ix}

Torch Dataset

Convert the text dataset into a torch Dataset instance, which we will need in order to create a federated dataset.


In [9]:
class TextDataset(Dataset):

    def __init__(self,text,transform=None):
        
        """arguments:
        
             text (List of Strings): Text corpus 
             transform: List of transforms to be performed on the input data
             
        """

        self.text = text
        self.data = []
        self.targets = []
        self.transform = transform
        
        # Create (context, target) pairs
        self.create_context()

    def __len__(self):
        
        return len(self.data)
    
    def create_context(self):
        
        '''Separate context words from target words and convert them to torch tensors'''
        
        context = []
        
        for i in range(len(self.text) - args.context_size):
            
            vec = []
            
            for j in range(0, args.context_size):
                
                vec.append(self.text[i + j])
                
            context.append([vec, self.text[i + args.context_size]])
        
        for words, target in context:
            
            tensor = torch.tensor([word_to_ix[w] for w in words], dtype=torch.long)
            self.data.append(tensor)
            self.targets.append(torch.tensor([word_to_ix[target]], dtype=torch.long))

    def __getitem__(self, idx):
        
        sample = self.data[idx]
        target = self.targets[idx]
                
        if self.transform:
            sample = self.transform(sample)

        return sample,target
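
Before federating it, a quick local sanity check of what the dataset produces (a sketch):

# Sketch: the plain (non-federated) dataset yields (context indices, target index) pairs
local_dataset = TextDataset(dataset)
print(len(local_dataset))   # number of (context, target) examples
sample, target = local_dataset[0]
print(sample, target)       # a LongTensor of 3 word indices and a 1-element target tensor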

Use a federated data loader to distribute the dataset to the workers.


In [10]:
federated_train_loader = sy.FederatedDataLoader( # <-- this is now a FederatedDataLoader 
                         TextDataset(dataset)
                         .federate((bob, alice)), batch_size=args.batch_size)


Scanning and sending data to bob, alice...
Done!
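
Each batch produced by this loader is a pointer tensor that lives on one of the two workers. As a quick sanity check (a sketch, relying on the same .location attribute used in the training loop below):

# Sketch: peek at which worker holds a given batch
for context, target in federated_train_loader:
    print(context.location.id)  # prints "bob" or "alice"
    break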

Neural Network Model

Define the neural network in PyTorch. The network is trained to predict the next word given the context words, and the required embedding is learnt as part of this model.


In [11]:
class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        
        return log_probs

In [12]:
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), args.embedding_dim, args.context_size)
optimizer = optim.SGD(model.parameters(), lr=args.lr)

Train Model


In [13]:
def train():
    
    model.train()
    iteration = 0
    for context, target in federated_train_loader:
        
        model.send(context.location)
        # Step 1. Prepare the inputs to be passed to the model (the federated loader
        # already provides the context as a tensor of integer word indices)
        context_idxs = context
    
        # Step 2. Recall that torch *accumulates* gradients. Before passing in a
        # new instance, you need to zero out the gradients from the old
        # instance
        model.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over next
        # words
        log_probs = model(context_idxs)

        # Step 4. Compute your loss function. (Again, Torch wants the target
        # word wrapped in a tensor)
        loss = loss_function(log_probs, target[0])

        # Step 5. Do the backward pass and update the parameters
        loss.backward()
        optimizer.step()
        model.get()  # <-- get the updated model back from the remote worker
        
        # Get the Python number from a 1-element Tensor by calling tensor.item()
        # The loss decreased every iteration over the training data!
        iteration += 1
        if iteration % 100 == 0:
            
            print(loss.get().item())

In [14]:
for epoch in range(0, args.epochs):
    train()
    print("EPOCH: ", epoch + 1)


/Users/hrishikesh/anaconda3/envs/syft_1/lib/python3.6/site-packages/syft/frameworks/torch/tensors/interpreters/native.py:215: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  response = eval(cmd)(*args, **kwargs)
4.5544867515563965
EPOCH:  1
4.0127153396606445
EPOCH:  2
3.4864673614501953
EPOCH:  3
2.9545602798461914
EPOCH:  4
2.4121615886688232
EPOCH:  5
1.876990556716919
EPOCH:  6
1.40726637840271
EPOCH:  7
1.042470932006836
EPOCH:  8
0.7883358001708984
EPOCH:  9
0.6222090721130371
EPOCH:  10

In [15]:
if (args.save_model):
    torch.save(model.state_dict(), "word_vector.pt")
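
If the model was saved, the weights can later be reloaded into a fresh NGramLanguageModeler instance, for example:

# Sketch: restore the trained weights from disk
loaded_model = NGramLanguageModeler(len(vocab), args.embedding_dim, args.context_size)
loaded_model.load_state_dict(torch.load("word_vector.pt"))
loaded_model.eval()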

Visualize Results


In [16]:
def SimilarPairs(model, vocab, inverse_vocab):
    
    '''For each word in the vocabulary, find its most similar word by cosine similarity of the embeddings'''
    
    matrix = []
    cos = nn.CosineSimilarity(dim=1, eps=1e-6)

    for ref_index in range(0, len(vocab)):
        
        max_sim = -10.0
        max_index = 0
        
        ref = model.embeddings(torch.LongTensor([ref_index]))
        
        for i in range(0, len(vocab)):
            
            output = cos(ref, model.embeddings(torch.LongTensor([i])))
            
            if output.item() > max_sim and i != ref_index:
                
                max_sim = output.item()
                max_index = i
                
        matrix.append([inverse_vocab[ref_index], inverse_vocab[max_index], max_sim])
    
    return matrix

In [17]:
similar_Pairs = SimilarPairs(model, word_to_ix, ix_to_word)

The learnt word vectors don't exactly capture the meanings of the actual words, since they were trained on a very small corpus.


In [18]:
# Most similar pairs for vocabulary indices 1 to 19
similar_Pairs[1:20]


Out[18]:
[['use,', 'And', 0.7608605623245239],
 ['more', 'were', 0.9239389896392822],
 ['count,', 'small', 0.597095251083374],
 ['say,', 'within', 0.6586640477180481],
 ['forty', 'mine', 0.6533328890800476],
 ['dig', 'see', 0.7790502309799194],
 ['treasure', 'lies,', 0.5748651027679443],
 ["youth's", 'child', 0.8469692468643188],
 ['mine', 'gazed', 0.7669305801391602],
 ['small', 'thy', 0.7145660519599915],
 ['If', "deserv'd", 0.6726179718971252],
 ['his', 'child', 0.6212238669395447],
 ['days;', 'praise.', 0.8395010232925415],
 ['worth', 'now,', 0.7184102535247803],
 ['sunken', 'shame,', 0.7679064869880676],
 ['held:', "totter'd", 0.6357218027114868],
 ['be', 'sum', 0.780488133430481],
 ['To', 'thine!', 0.7006815075874329],
 ['weed', 'my', 0.802433967590332]]
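
As mentioned in the introduction, the embedding layer itself is the reusable artefact. A quick sketch of how its weight matrix could be extracted for downstream use:

# Sketch: pull the learnt word vectors out of the embedding layer
embedding_matrix = model.embeddings.weight.data   # shape: (len(vocab), embedding_dim)
thy_vector = embedding_matrix[word_to_ix["thy"]]  # vector for the token "thy"
print(thy_vector.shape)                           # torch.Size([10])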

Well Done!

And voilà! We are now training a real-world learning model using Federated Learning!

Shortcomings of this Example

Of course, there are dozens of improvements we could think of: we would like the computation to run in parallel on the workers, to update the central model only every n batches, to reduce the number of messages exchanged between workers, etc.

On the security side it still has some major shortcomings. Most notably, when we call model.get() and receive the updated model from Bob or Alice, we can actually learn a lot about Bob and Alice's training data by looking at their gradients. We could average the gradient across multiple individuals before uploading it to the central server, like we did in Part 4.
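
A minimal sketch of that idea, assuming we kept one model copy per worker (bob_model and alice_model are hypothetical names, not defined in this notebook) and average their parameters before the next round:

# Sketch (hypothetical bob_model/alice_model): average two workers' copies parameter-by-parameter
with torch.no_grad():
    for p_avg, p_bob, p_alice in zip(model.parameters(),
                                     bob_model.parameters(),
                                     alice_model.parameters()):
        p_avg.copy_((p_bob.data + p_alice.data) / 2)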

The above embeddings are not useful for practical purposes, as they were trained on a very small corpus. Increasing the corpus size could lead to more useful embeddings.

Congratulations!!! - Time to Join the Community!

Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!

Star PySyft on GitHub

The easiest way to help our community is just by starring the repositories! This helps raise awareness of the cool tools we're building.

Pick our tutorials on GitHub!

We made really nice tutorials to get a better understanding of what Federated and Privacy-Preserving Learning should look like and how we are building the bricks for this to happen.

Join our Slack!

The best way to keep up to date on the latest advancements is to join our community!

Join a Code Project!

The best way to contribute to our community is to become a code contributor! If you want to start "one off" mini-projects, you can go to PySyft GitHub Issues page and search for issues marked Good First Issue.

If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!